47 research outputs found

    Nonparametric estimation of the characteristic triplet of a discretely observed L\'evy process

    Given a discrete time sample $X_1,\ldots,X_n$ from a L\'evy process $X=(X_t)_{t\geq 0}$ of finite jump activity, we study the problem of nonparametric estimation of the characteristic triplet $(\gamma,\sigma^2,\rho)$ corresponding to the process $X$. Based on Fourier inversion and kernel smoothing, we propose estimators of $\gamma$, $\sigma^2$ and $\rho$ and study their asymptotic behaviour. The obtained results include upper bounds on the mean square error of the estimators of $\gamma$ and $\sigma^2$, and an upper bound on the mean integrated square error of an estimator of $\rho$. Comment: 29 pages
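
    To make the Fourier approach concrete, here is a minimal, self-contained sketch of how $\sigma^2$ can be read off from the logarithm of the empirical characteristic function of the increments. It is not the paper's estimator: the kernel weighting is replaced by a crude two-frequency difference, the distinguished logarithm by the principal branch, and the model and all parameter values below are purely illustrative.

        import numpy as np

        rng = np.random.default_rng(1)

        # Illustrative model: increments of X_t = gamma*t + sigma*W_t plus a
        # compound Poisson part (intensity lam, N(0,1) jumps), observed at mesh dt.
        n, dt = 5000, 0.1
        gamma, sigma, lam = 1.0, 0.5, 2.0
        jumps = np.array([rng.normal(size=k).sum() for k in rng.poisson(lam * dt, size=n)])
        Z = gamma * dt + sigma * np.sqrt(dt) * rng.normal(size=n) + jumps

        def psi_hat(u):
            """Estimate of the exponent psi(u) = log E[exp(iu X_dt)] / dt.

            The principal-branch log is a crude stand-in for the distinguished
            logarithm; it is adequate while the empirical characteristic
            function stays away from zero on the chosen grid.
            """
            phi = np.exp(1j * u[:, None] * Z[None, :]).mean(axis=1)
            return np.log(phi) / dt

        # For finite jump activity, Re psi(u) = -sigma^2 u^2 / 2 + O(1) in u,
        # so differencing two frequencies cancels the bounded jump contribution.
        u = np.array([7.0, 10.0])
        psi = psi_hat(u)
        sigma2_hat = -2 * (psi[1] - psi[0]).real / (u[1] ** 2 - u[0] ** 2)
        print(f"sigma^2: true {sigma**2}, estimated {sigma2_hat:.3f}")

    The frequencies must stay moderate: the empirical characteristic function is reliable only where its modulus dominates the sampling noise of order $n^{-1/2}$.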

    Nonparametric inference for discretely sampled L\'evy processes

    Given a sample from a discretely observed L\'evy process $X=(X_t)_{t\geq 0}$ of finite jump activity, we study the problem of nonparametric estimation of the L\'evy density $\rho$ corresponding to the process $X$. We propose an estimator of $\rho$ based on a suitable inversion of the L\'evy-Khintchine formula and a plug-in device. The main results of the paper deal with upper risk bounds for estimation of $\rho$ over suitable classes of L\'evy triplets. The corresponding lower bounds are also discussed. Comment: 38 pages
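
    The inversion behind such a procedure can be sketched as follows; the display below is the standard identity, while the precise plug-in and regularization steps of the paper may differ. For a L\'evy process of finite jump activity, the characteristic exponent of an increment over mesh $\Delta$ is

        \psi(u) = \frac{1}{\Delta}\log \mathbb{E}\, e^{iuX_\Delta}
                = i\gamma u - \frac{\sigma^2 u^2}{2}
                  + \int_{\mathbb{R}} \bigl(e^{iux} - 1\bigr)\rho(x)\,dx,
        \qquad
        \psi''(u) = -\sigma^2 - \int_{\mathbb{R}} x^2 e^{iux}\rho(x)\,dx,

    so $-(\psi''(u)+\sigma^2)$ is the Fourier transform of $x^2\rho(x)$. Replacing $\psi$ by $\Delta^{-1}\log\hat\varphi_n$, with $\hat\varphi_n$ the empirical characteristic function of the increments, and damping the unreliable high frequencies with a kernel then gives a plug-in estimator of $\rho$ by Fourier inversion.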

    Bayesian linear inverse problems in regularity scales

    We obtain rates of contraction of posterior distributions in inverse problems defined by scales of smoothness classes. We derive abstract results for general priors, with contraction rates determined by Galerkin approximation. The rate depends on the amount of prior concentration near the true function and on the prior mass of functions with inferior Galerkin approximation. We apply the general result to non-conjugate series priors, showing that these priors give near-optimal and adaptive recovery in some generality, as well as to Gaussian priors and to mixtures of Gaussian priors, the latter of which are also shown to be near-optimal and adaptive. The proofs are based on general testing and approximation arguments, without explicit calculations on the posterior distribution. We are thus not restricted to priors based on the singular value decomposition of the operator. We illustrate the results with examples of inverse problems arising from differential equations. Comment: 34 pages
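
    For orientation, a canonical instance of such a setting (the framework above is more abstract than this) is the Gaussian white noise inverse problem

        Y = Af + \frac{1}{\sqrt{n}}\,\dot{W},

    with $A$ a known injective linear operator, where a contraction rate $\varepsilon_n$ is a sequence such that

        \mathbb{E}_{f_0}\,\Pi\bigl(f : \|f - f_0\| > M\varepsilon_n \,\big|\, Y\bigr) \to 0

    for a sufficiently large constant $M$. In the results above, $\varepsilon_n$ is governed by how much prior mass sits near the true $f_0$ and by the Galerkin approximation error of the functions carrying the bulk of the prior mass.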

    Parametric inference for stochastic differential equations: a smooth and match approach

    We study the problem of parameter estimation for a univariate, discretely observed, ergodic diffusion process given as a solution to a stochastic differential equation. The estimation procedure we propose consists of two steps. In the first, smoothing, step we smooth the data and construct a nonparametric estimator of the invariant density of the process. In the second, matching, step we exploit a characterisation of the invariant density as a solution of a certain ordinary differential equation, replace the invariant density in this equation by its nonparametric estimator from the smoothing step to arrive at an intuitively appealing criterion function, and define our estimator of the parameter of interest as a minimiser of this criterion function. Our main results show that under suitable conditions our estimator is $\sqrt{n}$-consistent, and even asymptotically normal. We also discuss a way of improving its asymptotic performance through a one-step Newton-Raphson type procedure, and present results of a small-scale simulation study. Comment: 26 pages
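
    A minimal sketch of the two-step idea on a toy example: an Ornstein-Uhlenbeck process $dX_t = -\theta X_t\,dt + dW_t$, for which the invariant density $\pi$ satisfies the ordinary differential equation $\tfrac12\pi'(x) + \theta x\,\pi(x) = 0$ (the first integral of the stationary Fokker-Planck equation). The kernel choices, the unweighted least-squares criterion, and all numerical values are illustrative rather than taken from the paper; for this model the criterion even has a closed-form minimiser.

        import numpy as np

        rng = np.random.default_rng(2)

        # Toy model: OU process dX = -theta*X dt + dW observed at mesh dt,
        # simulated with its exact Gaussian transition, started in stationarity.
        theta, dt, n = 2.0, 0.5, 5000
        a = np.exp(-theta * dt)
        s = np.sqrt((1 - a ** 2) / (2 * theta))
        X = np.empty(n)
        X[0] = rng.normal(scale=np.sqrt(1 / (2 * theta)))
        for k in range(1, n):
            X[k] = a * X[k - 1] + s * rng.normal()

        # Step 1 (smooth): Gaussian kernel estimates of the invariant density
        # pi and of its derivative pi'.
        h = 1.06 * X.std() * n ** (-0.2)          # Silverman's rule of thumb
        x = np.linspace(X.min() - 1.0, X.max() + 1.0, 400)
        D = (x[:, None] - X[None, :]) / h
        K = np.exp(-0.5 * D ** 2) / np.sqrt(2 * np.pi)
        pi_hat = K.mean(axis=1) / h
        dpi_hat = (-D * K).mean(axis=1) / h ** 2

        # Step 2 (match): minimise the criterion
        #   R(theta) = integral of (0.5*pi'(x) + theta*x*pi(x))^2 dx
        # with pi replaced by its estimate. The minimiser is explicit here;
        # the grid spacing of the Riemann sums cancels in the ratio.
        num = -0.5 * np.sum(dpi_hat * x * pi_hat)
        den = np.sum((x * pi_hat) ** 2)
        print(f"theta: true {theta}, estimated {num / den:.3f}")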

    Deconvolution for an atomic distribution

    Let $X_1,\ldots,X_n$ be i.i.d. observations, where $X_i = Y_i + \sigma Z_i$ and $Y_i$ and $Z_i$ are independent. Assume that the unobservable $Y$'s are distributed as a random variable $UV$, where $U$ and $V$ are independent, $U$ has a Bernoulli distribution with probability of zero equal to $p$, and $V$ has a distribution function $F$ with density $f$. Furthermore, let the random variables $Z_i$ have the standard normal distribution and let $\sigma>0$. Based on a sample $X_1,\ldots,X_n$, we consider the problem of estimation of the density $f$ and the probability $p$. We propose a kernel-type deconvolution estimator for $f$ and derive its asymptotic normality at a fixed point. A consistent estimator for $p$ is given as well. Our results demonstrate that our estimator behaves very much like the kernel-type deconvolution estimator in the classical deconvolution problem. Comment: Published at http://dx.doi.org/10.1214/07-EJS121 in the Electronic Journal of Statistics (http://www.i-journals.org/ejs/) by the Institute of Mathematical Statistics (http://www.imstat.org)
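
    A minimal sketch of a kernel-type deconvolution estimator for this model, with a spectral cutoff kernel and with $p$ and $\sigma$ treated as known to keep it short (the paper also estimates $p$); the distribution of $V$ and all numerical values are purely illustrative. It uses that the characteristic function of $X$ factors as $\varphi_X(u) = \bigl(p + (1-p)\varphi_V(u)\bigr)e^{-\sigma^2u^2/2}$, so the Fourier transform of $f$ can be isolated and inverted over a bounded frequency band.

        import numpy as np

        rng = np.random.default_rng(3)

        # Illustrative model: X = U*V + sigma*Z with P(U=0) = p, U and V
        # independent, V ~ N(2, 0.5^2) with density f, and Z standard normal.
        n, p, sigma = 5000, 0.3, 0.3
        U = (rng.random(n) > p).astype(float)
        V = rng.normal(2.0, 0.5, size=n)
        X = U * V + sigma * rng.normal(size=n)

        # Fourier transform of f on |u| <= 1/h, from
        #   F[f](u) = (phi_X(u) * exp(sigma^2 u^2 / 2) - p) / (1 - p).
        h = 0.2                                    # cutoff bandwidth (illustrative)
        u = np.linspace(-1 / h, 1 / h, 401)
        phi_hat = np.exp(1j * u[:, None] * X[None, :]).mean(axis=1)
        ft_f = (phi_hat * np.exp(sigma ** 2 * u ** 2 / 2) - p) / (1 - p)

        # Deconvolution estimate of f: inverse Fourier integral with a spectral
        # cutoff ("sinc") kernel, evaluated on a grid by a Riemann sum.
        x = np.linspace(0.0, 4.0, 201)
        du = u[1] - u[0]
        f_hat = (np.exp(-1j * np.outer(x, u)) @ ft_f).real * du / (2 * np.pi)
        ix = np.argmin(np.abs(x - 2.0))
        print(f"f(2): true {1 / (0.5 * np.sqrt(2 * np.pi)):.3f}, estimated {f_hat[ix]:.3f}")

    The factor $e^{\sigma^2u^2/2}$ blows up quickly, which is exactly the ill-posedness of deconvolution with a normal error: the cutoff $1/h$ must grow slowly with $n$ so that the sampling noise in $\hat\varphi_n$ is not amplified beyond control.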

    A non-parametric Bayesian approach to decompounding from high frequency data

    Given a sample from a discretely observed compound Poisson process, we consider non-parametric estimation of the density $f_0$ of its jump sizes, as well as of its intensity $\lambda_0$. We take a Bayesian approach to the problem and specify the prior on $f_0$ as the Dirichlet location mixture of normal densities. An independent prior for $\lambda_0$ is assumed to be compactly supported and to possess a positive density with respect to the Lebesgue measure. We show that under suitable assumptions the posterior contracts around the pair $(\lambda_0,f_0)$ at essentially (up to a logarithmic factor) the $\sqrt{n\Delta}$-rate, where $n$ is the number of observations and $\Delta$ is the mesh size at which the process is sampled. The emphasis is on high frequency data, $\Delta\to 0$, but the obtained results are also valid for fixed $\Delta$. In either case we assume that $n\Delta\rightarrow\infty$. Our main result implies existence of Bayesian point estimates converging (in the frequentist sense, in probability) to $(\lambda_0,f_0)$ at the same rate. We also discuss a practical implementation of our approach. The computational problem is dealt with by inclusion of auxiliary variables, and we develop a Markov chain Monte Carlo algorithm that samples from the joint distribution of the unknown parameters in the mixture density and the introduced auxiliary variables. Numerical examples illustrate the feasibility of this approach.
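
    For orientation, the likelihood underlying the posterior can be written down explicitly; the display below is the standard identity for compound Poisson increments, on top of which the paper's mixture prior and augmentation scheme sit. An increment over mesh $\Delta$ is distributed as

        X_{k\Delta} - X_{(k-1)\Delta} \;\sim\; \sum_{m=0}^{\infty}
            e^{-\lambda_0\Delta}\,\frac{(\lambda_0\Delta)^m}{m!}\; f_0^{*m},
        \qquad f_0^{*0} = \delta_0,

    i.e. a mixture over the unobserved number of jumps $m$ in the corresponding interval. Augmenting each observation with its jump count (and with the jump sizes themselves) is what turns this intractable convolution mixture into a likelihood with tractable Gibbs-type updates for $\lambda_0$ and for the components of the Dirichlet location mixture; these are the auxiliary variables referred to above.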